Current Issue: January - March | Volume: 2021 | Issue Number: 1 | Articles: 5
An important feature of blockchain technology is that all participants jointly maintain transaction data and can achieve mutual trust without centralized control, which relies on distributed consensus algorithms. The Practical Byzantine Fault Tolerant (PBFT) algorithm is a fault-tolerant algorithm based on state machine replication that addresses Byzantine faults, that is, the malicious behavior of nodes. In PBFT, the participating nodes are divided into a primary node and backup nodes; when the primary node behaves maliciously or fails, a new primary node is elected for message communication. The genetic algorithm (GA) is a computer simulation method inspired by the natural principle of genetic evolution, "natural selection, survival of the fittest", and is in essence a method for finding an optimal solution. In this work, the GA is used to select the best primary node in the PBFT algorithm and thereby improve consensus efficiency....
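As a rough illustration of the idea above, the sketch below uses a toy genetic algorithm to choose a PBFT primary node from a set of candidates. The fitness function (favoring low latency and high reputation) and the node attributes are assumptions made for this example, not the criteria defined in the paper.

```python
# Minimal sketch: genetic algorithm selecting a PBFT primary node.
# Node attributes (latency, reputation) and the fitness function are illustrative assumptions.
import random

random.seed(42)

# Hypothetical candidate nodes: (node_id, latency_ms, reputation in [0, 1])
NODES = [(i, random.uniform(5, 50), random.random()) for i in range(16)]

def fitness(node_id):
    """Higher is better: prefer low latency and high reputation (assumed criteria)."""
    _, latency, reputation = NODES[node_id]
    return reputation - 0.01 * latency

def evolve(pop_size=8, generations=20, mutation_rate=0.2):
    population = random.sample(range(len(NODES)), pop_size)
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Offspring either copy a survivor or mutate to a random candidate node.
        offspring = []
        for parent in survivors:
            child = random.randrange(len(NODES)) if random.random() < mutation_rate else parent
            offspring.append(child)
        population = survivors + offspring
    return max(population, key=fitness)

primary = evolve()
print("Selected primary node:", primary, "fitness:", round(fitness(primary), 3))
```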
With the development of mobile edge computing (MEC), more and more intelligent services and applications based on deep neural networks are deployed on mobile devices to meet the diverse and personalized needs of users. Unfortunately, deploying and running inference with deep learning models on resource-constrained devices is challenging. The traditional cloud-based method usually runs the deep learning model on a cloud server; since a large amount of input data must be transmitted to the server over the WAN, this incurs high service latency, which is unacceptable for most current latency-sensitive and computation-intensive applications. In this paper, we propose Cogent, an execution framework that accelerates deep neural network inference through device-edge synergy. The Cogent framework operates in two stages: an automatic pruning and partition stage and a containerized deployment stage. Cogent uses reinforcement learning (RL) to automatically predict pruning and partition strategies based on feedback from the hardware configuration and system conditions, so that the pruned and partitioned model better adapts to the system environment and the user's hardware configuration. The model is then deployed in containers on the device and the edge server to accelerate inference. Experiments show that the learning-based, hardware-aware automatic pruning and partition scheme significantly reduces service latency and accelerates the overall model inference process while maintaining accuracy....
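The brute-force sketch below illustrates the partition decision that Cogent's RL agent is described as learning: choose the layer after which intermediate activations are sent from the device to the edge server so that end-to-end latency is minimized. The per-layer costs, activation sizes, and bandwidth are invented numbers for illustration only.

```python
# Sketch of the device-edge partition search. Profiles are hypothetical, not from the paper.
# (layer_name, device_ms, edge_ms, activation_MB) for a small CNN
LAYERS = [
    ("conv1", 12.0, 1.5, 3.0),
    ("conv2", 18.0, 2.0, 1.5),
    ("conv3", 15.0, 1.8, 0.8),
    ("fc1",   25.0, 2.5, 0.1),
    ("fc2",    5.0, 0.5, 0.01),
]
BANDWIDTH_MBPS = 40.0  # assumed uplink bandwidth from device to edge

def end_to_end_latency(split):
    """Latency if layers [0, split) run on the device and [split, N) on the edge."""
    device = sum(l[1] for l in LAYERS[:split])
    edge = sum(l[2] for l in LAYERS[split:])
    # Transfer the activation of the last on-device layer (the raw input if split == 0).
    act_mb = 4.0 if split == 0 else LAYERS[split - 1][3]   # assume a 4 MB raw input
    transfer_ms = act_mb * 8 / BANDWIDTH_MBPS * 1000
    return device + transfer_ms + edge

best = min(range(len(LAYERS) + 1), key=end_to_end_latency)
print("Best split after layer index:", best, "latency (ms):", round(end_to_end_latency(best), 1))
```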
An iterative optimization for decoupling capacitor placement on a power delivery network (PDN) is presented based on a Genetic Algorithm (GA) and an Artificial Neural Network (ANN). The ANN is first trained on an appropriate set of results obtained from a commercial simulator. Once the ANN is ready, it is used within an iterative GA process to place a minimum number of decoupling capacitors that minimizes the difference between the input impedance at one or more locations and the required target impedance. The combined GA-ANN process is shown to provide results consistent with those obtained by a longer optimization based on commercial simulators. With the new approach the accuracy of the results remains at the same level, while the computational time is reduced by at least a factor of 30. Two test cases have been considered to validate the proposed approach, with the second one also compared against experimental measurements....
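A minimal sketch of the described GA loop follows, with a stand-in surrogate function used in place of the trained ANN impedance model. The number of candidate sites, the target impedance, and the fitness weighting are assumptions for illustration.

```python
# Illustrative GA loop for decap placement; the surrogate stands in for the trained ANN.
import random

random.seed(0)
N_SITES = 20          # candidate decap locations on the PDN (assumed)
TARGET_OHM = 0.05     # target input impedance (assumed)

def surrogate_impedance(placement):
    """Stand-in for the ANN: more decaps -> lower predicted impedance."""
    n = sum(placement)
    return 0.5 / (1 + n) + 0.002 * random.random()

def fitness(placement):
    z = surrogate_impedance(placement)
    violation = max(0.0, z - TARGET_OHM)
    # Penalize impedance above target first, then the number of capacitors used.
    return -(100.0 * violation + sum(placement))

def ga(pop=30, gens=60, pmut=0.05):
    population = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SITES)                                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pmut else g for g in child]   # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = ga()
print("Decaps used:", sum(best), "predicted impedance:", round(surrogate_impedance(best), 4))
```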
Deep neural networks (DNNs) have become a research hotspot in the field of image recognition. Developing suitable ways to introduce effective operations and layers into a DNN model is of great significance for improving the performance of image and video recognition. To achieve this, a multiscale pooling deep convolutional neural network model is designed in this paper that makes full use of block information at different sizes and scales in the image. No matter how large the feature map is, the multiscale sampling layer outputs three fixed-size feature matrices. Experimental results demonstrate that this method greatly improves the performance of current single-training-image methods and is suitable for solving image generation, style transfer, image editing, and other problems. It provides an effective solution for further industrial practice in the fields of medical imaging, remote sensing, and satellite imaging....
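The snippet below sketches one way such a multiscale pooling layer can produce fixed-size outputs for arbitrarily sized feature maps, in the spirit of spatial-pyramid pooling; the specific scales (1x1, 2x2, 4x4) and the use of PyTorch adaptive pooling are assumptions, not the paper's exact design.

```python
# Sketch of a multiscale pooling layer with fixed-size outputs; scales are assumed values.
import torch
import torch.nn as nn

class MultiscalePooling(nn.Module):
    def __init__(self, scales=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(s) for s in scales)

    def forward(self, x):
        # x: (batch, channels, H, W) with arbitrary H and W.
        # Each pooled map is flattened, so the output length is fixed:
        # channels * (1*1 + 2*2 + 4*4), regardless of H and W.
        return torch.cat([p(x).flatten(start_dim=1) for p in self.pools], dim=1)

layer = MultiscalePooling()
for h, w in [(32, 32), (57, 91)]:
    out = layer(torch.randn(1, 8, h, w))
    print((h, w), "->", tuple(out.shape))   # always (1, 8 * 21)
```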
In the development of technology for smart cities, the installation and deployment of electronic motor vehicle registration identification have attracted great attention in smart transportation in recent years. Vehicle velocity measurement is one of the fundamental data collection tasks for motor vehicles. Velocity detection using electronic registration identification of motor vehicles is constrained by the detection algorithm, the material of the automobile windshield, the placement of the decals, the installation method of the signal reader, and the angle of the antenna. Software and hardware for electronic motor vehicle registration identification produced in the standard manner cannot meet the velocity-detection accuracy required in all scenarios. Based on actual application requirements, we propose a calibration method for the numerical output of the automobile velocity detector based on edge computing of optimized multiple reader/writer velocity values and a particle swarm-optimized radial basis function (RBF) neural network. The proposed method was tested on a two-way eight-lane road, and the test results showed that it can effectively improve the accuracy of velocity detection using electronic registration identification of motor vehicles....
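As a hedged sketch of the calibration idea, the code below fits a small RBF network that maps raw reader velocities to corrected values, with a plain global-best particle swarm optimizer tuning the network parameters. The synthetic data, swarm settings, and network size are illustrative assumptions rather than values from the paper.

```python
# Sketch: PSO-tuned RBF network for velocity calibration (synthetic data, assumed settings).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: raw reader velocity vs. ground-truth velocity (km/h)
raw = rng.uniform(20, 120, size=200)
true = 0.92 * raw + 3.0 + rng.normal(0, 1.0, size=raw.shape)   # assumed bias model

N_CENTERS = 5
DIM = 3 * N_CENTERS   # particle = [centers, widths, weights]

def rbf_predict(params, x):
    c = params[:N_CENTERS]
    s = np.abs(params[N_CENTERS:2 * N_CENTERS]) + 1e-3
    w = params[2 * N_CENTERS:]
    phi = np.exp(-((x[:, None] - c) ** 2) / (2 * s ** 2))   # Gaussian basis functions
    return phi @ w

def mse(params):
    return float(np.mean((rbf_predict(params, raw) - true) ** 2))

# Plain global-best PSO over the RBF parameters
n_particles, iters = 30, 200
pos = rng.uniform(-1, 1, (n_particles, DIM)) * 50
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("Calibration MSE:", round(float(min(pbest_val)), 3))
```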